Let us start with a few reminders about population dynamics. The basis of population dynamics is exponential growth, \(\frac{dN(t)}{dt} = rN\). Note that this corresponds to a constant per capita growth rate (the fitness of the average individual), \(\frac{1}{N} \frac{dN(t)}{dt} = r\). By separating the terms in \(N\) from the terms in \(t\), one can show that \(N(t) = N(0) \exp(rt)\), and hence \(N(t+1) = N(t) \exp(r)\), often written \(N_{t+1} = N_t e^{r}\).
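For the record, the separation-of-variables step goes as follows:
\[
\frac{dN}{N} = r\,dt \quad\Rightarrow\quad \int_{N(0)}^{N(t)}\frac{dN}{N} = \int_0^t r\,ds \quad\Rightarrow\quad \log\frac{N(t)}{N(0)} = rt \quad\Rightarrow\quad N(t) = N(0)e^{rt}.
\]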
What on Earth, you might think, could this highly theoretical construct have to do with time series? Well, here you go.
Let’s denote the log-abundance \(x_t = \log(N_t)\). Now we have \(x_{t+1} = x_t + r\). As it turns out, the stochastic model with environmental stochasticity has log-normal noise, so on the log scale it is simply \(x_{t+1} = x_t + r + \epsilon_t, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2)\). This is the simplest model that can be fitted to time series data; mathematically speaking, it is a biased random walk with drift \(r\) (for \(r>0\), it is more likely to go up than down). An \(r\) close to zero indicates relative stability.
This model has all sorts of interesting properties due to the stochastic part, such as the fact that the probability of hitting zero can be very large if the variance is large, even with a clearly positive \(r\). Let’s simulate and fit that model.
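Before the R simulation, here is a quick Monte Carlo check of the hitting-probability claim (a Python sketch; the parameter values, starting abundance, and quasi-extinction threshold are illustrative choices, not taken from the text):

```python
import numpy as np

# Biased random walk on the log scale: x[t+1] = x[t] + r + Normal(0, sigma).
# Monte Carlo estimate of the probability that log-abundance drops below a
# quasi-extinction threshold at least once within tmax steps.
# Illustrative values (not from the text): start at N = 5, threshold at N = 1.
rng = np.random.default_rng(42)
r, tmax, nrep = 0.1, 100, 5000
x0, threshold = np.log(5.0), 0.0

def hit_prob(sigma):
    steps = rng.normal(r, sigma, size=(nrep, tmax))  # increments r + noise
    paths = x0 + np.cumsum(steps, axis=1)            # nrep trajectories
    return float(np.mean(paths.min(axis=1) < threshold))

p_small, p_large = hit_prob(0.3), hit_prob(1.5)
print(p_small, p_large)  # despite r > 0, a large sigma makes hitting very likely
```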
set.seed(42)
r=0.1
tmax = 50
x=x1=rep(0,tmax)
for (t in 1:(tmax-1)){x[t+1] = x[t] + r + rnorm(1,0,sqrt(0.1))}  # sigma^2 = 0.1
for (t in 1:(tmax-1)){x1[t+1] = x1[t] + r + rnorm(1,0,sqrt(0.25))} # sigma^2 = 0.25
minx=min(c(x,x1))
maxx=max(c(x,x1))
par(mfrow=c(1,2),pch=20)
plot(1:tmax,x,type="o",ylim=c(minx,maxx))
lines(1:tmax,x1,type="o",col="blue")
plot(1:tmax,exp(x),type="o",ylim=c(exp(minx),exp(maxx)))
lines(1:tmax,exp(x1),type="o",col="blue")
Data in a list:
m10.data <- list(x=x, tmax = tmax)
m11.data <- list(x=x1, tmax = tmax)
data { // observed variables
  int<lower=1> tmax; // number of observations
  vector[tmax] x; // state variables
}
parameters { // unobserved parameters
  real r;
  real<lower=0> sigma;
}
model {
  // priors
  r ~ normal(0,1);
  sigma ~ exponential(10);
  // likelihood
  for (t in 1:(tmax-1)){
    x[t+1] ~ normal(x[t] + r, sigma);
  }
}
fit.m1.0 <- sampling(m1.1, data = m10.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 7b7f79be40c122a68bde684fd2c580e1.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.09 0.00 0.05 0.03 0.09 0.15 743 1
## sigma 0.36 0.00 0.04 0.32 0.36 0.41 804 1
## lp__ 19.62 0.04 0.86 18.45 19.86 20.46 549 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:23:40 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
Keep in mind that \(\sqrt{0.1} \approx 0.31\) and \(\sqrt{0.25} = 0.5\): the estimates are in the right ballpark. Let’s do that with the other dataset.
fit.m1.1 <- sampling(m1.1, data = m11.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 7b7f79be40c122a68bde684fd2c580e1.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.15 0.00 0.07 0.07 0.15 0.24 821 1
## sigma 0.46 0.00 0.05 0.40 0.45 0.52 534 1
## lp__ 6.89 0.05 1.02 5.49 7.19 7.82 371 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:23:42 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
And again with two other datasets, to see if we could have some bias here (or whether it is just low precision).
x2=x3=rep(0,tmax)
for (t in 1:(tmax-1)){x2[t+1] = x2[t] + r + rnorm(1,0,sqrt(0.1))}
for (t in 1:(tmax-1)){x3[t+1] = x3[t] + r + rnorm(1,0,sqrt(0.1))}
m12.data <- list(x=x2, tmax = tmax)
m13.data <- list(x=x3, tmax = tmax)
fit.m1.2<- sampling(m1.1, data = m12.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 7b7f79be40c122a68bde684fd2c580e1.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.02 0.00 0.04 -0.03 0.02 0.07 715 1
## sigma 0.28 0.00 0.03 0.25 0.28 0.32 773 1
## lp__ 32.80 0.05 1.13 31.30 33.14 33.75 528 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:23:44 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
fit.m1.3<- sampling(m1.1, data = m13.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 7b7f79be40c122a68bde684fd2c580e1.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.10 0.00 0.05 0.05 0.10 0.16 491 1
## sigma 0.31 0.00 0.03 0.27 0.31 0.35 593 1
## lp__ 28.31 0.05 1.05 26.90 28.68 29.24 412 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:23:45 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
There does not seem to be any bias in \(\sigma\). Still, uniform priors on the SD are often suggested to avoid issues. Let’s try that.
data { // observed variables
  int<lower=1> tmax; // number of observations
  vector[tmax] x; // state variables
}
parameters { // unobserved parameters
  real r;
  real<lower=0> sigma;
}
model {
  // priors
  r ~ normal(0,1);
  sigma ~ uniform(0,1); // an SD of 1 is already large here; we are well below
  // likelihood
  for (t in 1:(tmax-1)){
    x[t+1] ~ normal(x[t] + r, sigma);
  }
}
We now fit the model to two datasets having \(\sigma = \sqrt{0.1} \approx 0.31\) and one having \(\sigma = \sqrt{0.25} = 0.5\).
fit.m1.4<- sampling(m1.2, data = m10.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.4, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 390cc8e1b9683651faf840126b65cd1c.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.09 0.00 0.05 0.02 0.08 0.16 828 1.01
## sigma 0.38 0.00 0.04 0.33 0.37 0.43 832 1.00
## lp__ 23.20 0.04 0.95 21.88 23.50 24.09 563 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:24:29 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
fit.m1.5<- sampling(m1.2, data = m12.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.5, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 390cc8e1b9683651faf840126b65cd1c.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.03 0.00 0.04 -0.03 0.03 0.08 858 1
## sigma 0.29 0.00 0.03 0.26 0.29 0.33 504 1
## lp__ 35.67 0.05 0.99 34.38 35.97 36.53 415 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:24:30 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
### for the last one we take the sigma = 0.5 parameter set
fit.m1.6<- sampling(m1.2, data = m11.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.6, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 390cc8e1b9683651faf840126b65cd1c.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.15 0.00 0.07 0.06 0.15 0.24 607 1
## sigma 0.48 0.00 0.05 0.42 0.47 0.55 922 1
## lp__ 11.47 0.05 1.00 10.07 11.76 12.39 403 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:24:30 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
Let’s investigate the correlations then. All good.
mcmc_pairs(as.array(fit.m1.4), np = nuts_params(fit.m1.4),
pars = c("r","sigma"),
off_diag_args = list(size = 1, alpha = 1/3),
np_style = pairs_style_np(div_size=3, div_shape = 19))
What have we learned so far? The estimated \(r\) tracks the realized mean growth rate of each particular trajectory, which can deviate substantially from the true \(r = 0.1\):
mean(m10.data$x[2:tmax]-m10.data$x[1:(tmax-1)]) #realized mean growth rate dataset m10
## [1] 0.08425807
mean(m12.data$x[2:tmax]-m12.data$x[1:(tmax-1)]) #realized mean growth rate dataset m12
## [1] 0.02281295
minx=min(c(m10.data$x,m12.data$x))
maxx=max(c(m10.data$x,m12.data$x))
par(pch=20)
plot(1:tmax,m10.data$x,type="o",ylim=c(minx,maxx),ylab="log(N)",xlab="Time")
lines(1:tmax,m12.data$x,type="o",col="blue")
You might be shocked that the next model is not logistic growth. Well, from a statistical perspective, consider the Gompertz model:
\[N_{t+1} = N_t e^{a - \beta \log(N_t) + \epsilon_t}, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2). \]
which is rigorously equivalent to
\[N_{t+1} = N_t e^{a} \frac{1}{N_t^{\beta}} \times \exp( \epsilon_t ). \]
This usually blows people’s minds, so we have to recall that \(\beta \log(N_t) = \log(N_t^\beta)\), so that \(\exp(\beta\log(N_t)) = \exp(\log(N_t^\beta)) = N_t^\beta\). One of the important ideas about this model is that \(\beta\) (usually \(<1\)) mediates the effect of individuals at low vs high densities: as population size increases, individuals have less of a negative effect. The model is also extraordinarily convenient statistically, because this equation is equivalent to
\[x_{t+1} = x_t + a - \beta x_t + \epsilon_t = a + b x_t + \epsilon_t, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2).\]
Convenient, isn’t it? Now we can use this versatile model to test whether population growth slows down when density increases, just by fitting a linear time series model. This is called the AR(1) model – AutoRegressive model of order 1. It has a very famous continuous-time counterpart, the Ornstein-Uhlenbeck process, which has physical origins; you can think of it as a ball attracted to a center with a pull strength proportional to its distance from that center. In a deterministic setting, the ball converges to the center (\(x^* = a/(1-b) = a/\beta\) here), but random perturbations (\(\epsilon_t\)) make it oscillate around it.
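As a quick numerical sanity check of that "center" (a Python sketch, using the same \(a\) and \(b\) values as the simulation below):

```python
# Deterministic skeleton of the Gompertz/AR(1) model on the log scale:
# x[t+1] = a + b*x[t].  With the values used in the simulation below
# (a = 1, b = 0.8), the center is x* = a/(1-b) = 5.
a, b = 1.0, 0.8

x = 0.0
for _ in range(200):      # iterate until convergence
    x = a + b * x

x_star = a / (1.0 - b)    # closed-form fixed point
print(x, x_star)          # both equal 5 up to rounding
```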
There are many applications and multivariate generalisations of this simple model in aquatic ecology, including fisheries. For much more on this and on state-space models (in Stan as well), see the course by Eli Holmes, Eric Ward and Mark Scheuerell. Here we will only cover basic Gompertz growth, with a single state, the process state (no observation error).
a=1
b=0.8 # this is 1-beta, so the "pull strength" beta = 0.2 here (and if b=1, we have a random walk)
tmax = 50
x=x1=rep(0,tmax)
for (t in 1:(tmax-1)){x[t+1] = a + b*x[t] + rnorm(1,0,sqrt(0.1))}  # sigma^2 = 0.1
for (t in 1:(tmax-1)){x1[t+1] = a + b*x1[t] + rnorm(1,0,sqrt(0.25))} # sigma^2 = 0.25
minx=min(c(x,x1))
maxx=max(c(x,x1))
par(mfrow=c(1,2),pch=20)
plot(1:tmax,x,type="o",ylim=c(minx,maxx))
lines(1:tmax,x1,type="o",col="blue")
plot(1:tmax,exp(x),type="o",ylim=c(exp(minx),exp(maxx)))
lines(1:tmax,exp(x1),type="o",col="blue")
Now let’s fit that model.
data { // observed variables
  int<lower=1> tmax; // number of observations
  vector[tmax] x; // state variables
}
parameters { // unobserved parameters
  real a; // growth rate at log(N) = 0 (i.e., N = 1), not a true maximum
  real b;
  real<lower=0> sigma;
}
model {
  // priors
  a ~ normal(0,1);
  b ~ normal(0,1);
  sigma ~ exponential(10);
  // likelihood
  for (t in 1:(tmax-1)){
    x[t+1] ~ normal(a + b*x[t], sigma);
  }
}
m20.data <- list(x=x, tmax = tmax)
m21.data <- list(x=x1, tmax = tmax)
fit.m2.0 <- sampling(m2.1, data = m20.data, iter = 1000, chains = 2, cores = 2)
print(fit.m2.0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: ba53cedb4bd64042c7d0d2056a834e67.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## a 1.33 0.01 0.18 1.09 1.33 1.56 438 1.01
## b 0.73 0.00 0.04 0.68 0.73 0.78 436 1.01
## sigma 0.27 0.00 0.03 0.24 0.27 0.31 567 1.00
## lp__ 33.20 0.06 1.21 31.57 33.47 34.42 366 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:25:17 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
Keep in mind that \(\sqrt{0.1} \approx 0.31\) and \(\sqrt{0.25} = 0.5\). Let’s do that with the other dataset.
fit.m2.1 <- sampling(m2.1, data = m21.data, iter = 1000, chains = 2, cores = 2)
print(fit.m2.1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: ba53cedb4bd64042c7d0d2056a834e67.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## a 1.82 0.02 0.37 1.35 1.84 2.27 339 1.01
## b 0.62 0.00 0.08 0.53 0.62 0.71 333 1.01
## sigma 0.64 0.00 0.06 0.57 0.64 0.72 400 1.00
## lp__ -14.22 0.07 1.24 -15.87 -13.89 -13.04 340 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:25:20 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
Let’s investigate the correlations.
mcmc_pairs(as.array(fit.m2.0), np = nuts_params(fit.m2.0),
pars = c("a","b","sigma"),
off_diag_args = list(size = 1, alpha = 1/3),
np_style = pairs_style_np(div_size=3, div_shape = 19))
Now, one of the ugly problems of population dynamics starts rearing its head. There’s a correlation between \(a\) and \(b\) (not a dramatic one here, but it is annoying). It’s not just something peculiar to that dataset; see the other one.
mcmc_pairs(as.array(fit.m2.1), np = nuts_params(fit.m2.1),
pars = c("a","b","sigma"),
off_diag_args = list(size = 1, alpha = 1/3),
np_style = pairs_style_np(div_size=3, div_shape = 19))
So, how to fix this? Most fixes involve some kind of re-parametrization or an informative prior (or, in some cases, adding covariates to parameters). In this particular case, there’s a very easy fix. We often use that model close to equilibrium, so we can use the transformation \(\tilde{x} = x - \bar{x}\) to study only the deviations of log-abundance from the mean log-abundance. This transforms the model into
\[ \tilde{x}_{t+1} = b \tilde{x}_t + \epsilon_t, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2).\]
Parameter \(a\) is then eliminated from the centered version of the model and we get rid of the problem.
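To see why, substitute \(x_t = \tilde{x}_t + \bar{x}\) into \(x_{t+1} = a + b x_t + \epsilon_t\):
\[
\tilde{x}_{t+1} = b\,\tilde{x}_t + \left(a - (1-b)\bar{x}\right) + \epsilon_t,
\]
and the constant term vanishes when \(\bar{x} \approx a/(1-b)\), the equilibrium log-abundance.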
[Future step: Adding covariates on the population growth rate with this model? May solve some identifiability problems while revealing potentially others. ]
Now we come to this revered, almost two-centuries-old model. In continuous time, \(\frac{dN(t)}{dt} = rN(1-\frac{N}{K}) = rN- \gamma N^2\). It says that individual fitness declines linearly with population size (by contrast, in the Gompertz population model it declines logarithmically with population size). We could fit this model directly as an ODE, using special tools. But, unlike many other ODE models, this one can be explicitly integrated (using some algebra), and one can derive that
\[ N(t) = \frac{ N(0) e^{rt}}{1+ \frac{N(0)}{K} (e^{rt}-1) }\]
or equivalently
\[ N(t+1) = \frac{ N(t) e^{r}}{1+ \frac{N(t)}{K} (e^{r}-1) }\]
which is usually denoted
\[ N_{t+1} = \frac{ N_t R}{1+\alpha N_t }\] with the correspondence \(R= e^r\) and competition coefficient \(\alpha = \frac{e^{r}-1}{K}\) (instead of \(\gamma = \frac{r}{K}\) in continuous time). The discrete-time form is usually preferred, because most data we have access to are sampled from year to year or at some other regular time interval. And it can be fitted to data like the previous Gompertz model (although not by linear regression, since the model is nonlinear even when written with \(x=\log(N)\)).
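As a consistency check, setting \(N_{t+1} = N_t = N^*\) in the discrete-time form recovers the carrying capacity:
\[
N^* = \frac{N^* R}{1+\alpha N^*} \quad\Rightarrow\quad 1 + \alpha N^* = R \quad\Rightarrow\quad N^* = \frac{R-1}{\alpha} = \frac{e^r-1}{(e^r-1)/K} = K.
\]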
This model is rather popular in plant ecology nowadays, although it is usually called Beverton-Holt because this functional form originated in fisheries science. We can introduce environmental stochasticity on the growth rate:
\[ N_{t+1} = \frac{ N_t e^{r+\epsilon_t}}{1+\alpha N_t }.\] Sometimes people write
\[ N_{t+1} = \frac{ N_t e^{r+\epsilon_t}}{1+ \frac{N_t}{K'} }\] but \(K'\) is a “false carrying capacity” here, in the sense that the true equilibrium population size is \(N^* = K'(e^r-1)\), which can be much higher than \(K'\) for large \(r\).
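A quick numerical illustration of that last point (a Python sketch; \(r=2\) and \(K'=10\) match the values used in the simulation code below, where \(K' = 1/\alpha\)):

```python
import math

# "False carrying capacity": with N[t+1] = N[t]*exp(r) / (1 + N[t]/Kprim),
# the equilibrium is N* = Kprim*(exp(r) - 1), not Kprim itself.
# Same values as the r = 2 simulation below (Kprim = 1/alpha = 10).
r, Kprim = 2.0, 10.0

N_star = Kprim * (math.exp(r) - 1.0)

# Check by iterating the deterministic map from N = 1
N = 1.0
for _ in range(200):
    N = N * math.exp(r) / (1.0 + N / Kprim)

print(round(N_star, 1), round(N, 1))  # ~63.9, far above Kprim = 10
```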
Let’s now simulate that model and fit it, with different parameterizations. We consider two values of \(r\) (0.1 and 2) and two levels of \(\sigma^2\) (0.05 and 0.5).
set.seed(42)
alpha=0.1
Kprim=1/alpha #Kprim=10
tmax=50
####Let's start with r=0.1
r=0.1
R=exp(r) #1.0105
K=(exp(r)-1)/alpha #K=1.05
N_BH=N_BH1=rep(NA,tmax)
N_BH[1]=N_BH1[1]=1
for (t in 1:(tmax-1)){N_BH[t+1] = (exp(r+rnorm(1,0,sqrt(0.05))))*N_BH[t]/(1+alpha*N_BH[t])}
for (t in 1:(tmax-1)){N_BH1[t+1] = (exp(r+rnorm(1,0,sqrt(0.5))))*N_BH1[t]/(1+alpha*N_BH1[t])}
###
par(mfrow=c(1,2),pch=20)
plot(1:tmax,N_BH,type="o",ylim=range(c(N_BH,N_BH1)),main="r=0.1")
lines(1:tmax,N_BH1,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")
####And go on with r=2
r=2
R=exp(r) #7.4
K=(exp(r)-1)/alpha #K=63.9
N_BH.2=N_BH1.2=rep(NA,tmax)
N_BH.2[1]=N_BH1.2[1]=1
for (t in 1:(tmax-1)){N_BH.2[t+1] = (exp(r+rnorm(1,0,sqrt(0.05))))*N_BH.2[t]/(1+alpha*N_BH.2[t])}
for (t in 1:(tmax-1)){N_BH1.2[t+1] = (exp(r+rnorm(1,0,sqrt(0.5))))*N_BH1.2[t]/(1+alpha*N_BH1.2[t])}
###
plot(1:tmax,N_BH.2,type="o",ylim=range(c(N_BH.2,N_BH1.2)),main="r=2")
lines(1:tmax,N_BH1.2,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")
m30.data <- list(x=log(N_BH), tmax = tmax)
m31.data <- list(x=log(N_BH1), tmax = tmax)
m30.2.data <- list(x=log(N_BH.2), tmax = tmax)
m31.2.data <- list(x=log(N_BH1.2), tmax = tmax)
store_results = matrix(NA, nrow=12, ncol=5,
                       dimnames=list(c("d0.m1","d1.m1","d0.2.m1","d1.2.m1",
                                       "d0.m2","d1.m2","d0.2.m2","d1.2.m2",
                                       "d0.m3","d1.m3","d0.2.m3","d1.2.m3"),
                                     c("r","alpha","sigma","cor(r,DD)","likelihood")))
data { // observed variables
  int<lower=1> tmax; // number of observations
  vector[tmax] x; // state variable
}
parameters { // unobserved parameters
  real r; // growth rate
  real<lower=0> alpha; // density-dependence
  real<lower=0> sigma;
}
model {
  // priors
  r ~ normal(0,1);
  alpha ~ exponential(10);
  sigma ~ exponential(10);
  // likelihood
  for (t in 1:(tmax-1)){
    x[t+1] ~ normal(r + x[t] - log(1 + alpha*exp(x[t])), sigma);
  }
}
One might ask: why is the model written like this? Well, write \(x=\log(N)\); then the BH model reads
\[x_{t+1} = \log \left(\frac{N_t e^{r+\epsilon_t}}{1+\alpha N_t} \right) = x_t + r + \epsilon_t - \log(1 + \alpha e^{x_t}).\]
How do the four simulated parameter sets work with this structure of the model?
fit.m3.1_0 <- sampling(m3.1, data = m30.data, iter = 1000, chains = 2, cores = 2)
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m3.1_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: c2f1c1064990ad96e28bef576c569b68.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.06 0.00 0.06 -0.02 0.05 0.13 329 1.00
## alpha 0.07 0.00 0.05 0.01 0.06 0.13 336 1.00
## sigma 0.26 0.00 0.03 0.23 0.26 0.30 309 1.01
## lp__ 33.23 0.13 1.51 31.08 33.63 34.66 146 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:26:08 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.1_0)
store_results["d0.m1","r"]=mean(tmp$r)
store_results["d0.m1","alpha"]=mean(tmp$alpha)
store_results["d0.m1","sigma"]=mean(tmp$sigma)
store_results["d0.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d0.m1","likelihood"]=mean(tmp$lp__)
Let’s use the inferred parameters for another simulation and compare to the actual dataset.
alpha_simu=mean(as.matrix(fit.m3.1_0,pars="alpha"))
r_simu=mean(as.matrix(fit.m3.1_0,pars="r"))
sigma_simu=mean(as.matrix(fit.m3.1_0,pars="sigma"))
N_simu=rep(NA,tmax)
N_simu[1]=1
for (t in 1:(tmax-1)){N_simu[t+1] = (exp(r_simu+rnorm(1,0,sigma_simu)))*N_simu[t]/(1+alpha_simu*N_simu[t])}
plot(1:tmax,N_simu,t="o",xlab="Time",ylim=range(c(N_simu,N_BH)))
lines(1:tmax,N_BH,col="blue")
mcmc_pairs(as.array(fit.m3.1_0), np = nuts_params(fit.m3.1_0),
pars = c("r","alpha","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m3.1_1 <- sampling(m3.1, data = m31.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.1_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: c2f1c1064990ad96e28bef576c569b68.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.27 0.01 0.16 0.07 0.26 0.48 426 1
## alpha 0.17 0.00 0.09 0.06 0.15 0.29 412 1
## sigma 0.63 0.00 0.06 0.56 0.63 0.71 459 1
## lp__ -14.98 0.08 1.36 -16.79 -14.64 -13.65 260 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:26:11 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.1_1)
store_results["d1.m1","r"]=mean(tmp$r)
store_results["d1.m1","alpha"]=mean(tmp$alpha)
store_results["d1.m1","sigma"]=mean(tmp$sigma)
store_results["d1.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d1.m1","likelihood"]=mean(tmp$lp__)
alpha_simu=mean(as.matrix(fit.m3.1_1,pars="alpha"))
r_simu=mean(as.matrix(fit.m3.1_1,pars="r"))
sigma_simu=mean(as.matrix(fit.m3.1_1,pars="sigma"))
N_simu=rep(NA,tmax)
N_simu[1]=1
for (t in 1:(tmax-1)){N_simu[t+1] = (exp(r_simu+rnorm(1,0,sigma_simu)))*N_simu[t]/(1+alpha_simu*N_simu[t])}
plot(1:tmax,N_simu,t="o",xlab="Time",ylim=range(c(N_simu,N_BH1)))
lines(1:tmax,N_BH1,col="blue")
mcmc_pairs(as.array(fit.m3.1_1), np = nuts_params(fit.m3.1_1),
pars = c("r","alpha","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m3.1_0.2 <- sampling(m3.1, data = m30.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.1_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: c2f1c1064990ad96e28bef576c569b68.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 2.04 0.01 0.18 1.80 2.04 2.27 346 1.00
## alpha 0.11 0.00 0.02 0.08 0.11 0.14 340 1.00
## sigma 0.21 0.00 0.02 0.18 0.21 0.24 306 1.01
## lp__ 42.83 0.07 1.22 41.21 43.16 44.06 317 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:26:15 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.1_0.2)
store_results["d0.2.m1","r"]=mean(tmp$r)
store_results["d0.2.m1","alpha"]=mean(tmp$alpha)
store_results["d0.2.m1","sigma"]=mean(tmp$sigma)
store_results["d0.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d0.2.m1","likelihood"]=mean(tmp$lp__)
fit.m3.1_1.2 <- sampling(m3.1, data = m31.2.data, iter = 1000, chains = 2, cores = 2)# 2 divergent transitions, leaving that here but checking pairs
print(fit.m3.1_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: c2f1c1064990ad96e28bef576c569b68.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 2.01 0.02 0.35 1.58 1.99 2.47 271 1.01
## alpha 0.12 0.00 0.05 0.07 0.11 0.19 298 1.01
## sigma 0.59 0.00 0.06 0.52 0.59 0.67 433 1.01
## lp__ -12.97 0.07 1.26 -14.56 -12.64 -11.74 356 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:26:17 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.1_1.2)
store_results["d1.2.m1","r"]=mean(tmp$r)
store_results["d1.2.m1","alpha"]=mean(tmp$alpha)
store_results["d1.2.m1","sigma"]=mean(tmp$sigma)
store_results["d1.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d1.2.m1","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.1_1.2), np = nuts_params(fit.m3.1_1.2),
pars = c("r","alpha","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
data { // observed variables
  int<lower=1> tmax; // number of observations
  vector[tmax] x; // state variables
}
parameters { // unobserved parameters
  real r; // growth rate
  real<lower=0> K; // carrying capacity
  real<lower=0> sigma;
}
model {
  // priors
  r ~ normal(0,1);
  K ~ exponential(1); // exponential(10) does not work well here
  sigma ~ exponential(10);
  // likelihood
  for (t in 1:(tmax-1)){
    x[t+1] ~ normal(r + x[t] - log(1 + (exp(r)-1)*exp(x[t])/K), sigma);
  }
}
fit.m3.2_0 <- sampling(m3.2, data = m30.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #65 divergent transitions without correcting adapt_delta
print(fit.m3.2_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 3a2bb0de4c57cb10036e548415ba5584.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.06 0.00 0.07 -0.01 0.06 0.15 386 1.01
## K 1.36 0.04 0.84 0.57 1.16 2.41 413 1.00
## sigma 0.26 0.00 0.03 0.23 0.26 0.29 456 1.00
## lp__ 35.72 0.07 1.23 34.17 36.01 36.95 295 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:44:39 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.2_0)
store_results["d0.m2","r"]=mean(tmp$r)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K
store_results["d0.m2","alpha"]=alpha
store_results["d0.m2","sigma"]=mean(tmp$sigma)
store_results["d0.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d0.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_0), np = nuts_params(fit.m3.2_0),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m3.2_1 <- sampling(m3.2, data = m31.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #97 divergent transitions without correcting adapt_delta
## Warning: There were 2 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m3.2_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 3a2bb0de4c57cb10036e548415ba5584.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.34 0.01 0.19 0.09 0.33 0.60 289 1.01
## K 1.78 0.03 0.61 1.00 1.77 2.59 373 1.00
## sigma 0.63 0.00 0.06 0.56 0.63 0.71 459 1.00
## lp__ -12.57 0.06 1.12 -14.05 -12.32 -11.39 340 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:44:42 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.2_1)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K
store_results["d1.m2","r"]=mean(tmp$r)
store_results["d1.m2","alpha"]=alpha
store_results["d1.m2","sigma"]=mean(tmp$sigma)
store_results["d1.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d1.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_1), np = nuts_params(fit.m3.2_1),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m3.2_0.2 <- sampling(m3.2, data = m30.2.data, iter = 1000, chains = 2, cores = 2,control=list(adapt_delta=0.999)) #92 divergent transitions without correcting adapt_delta
## Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
print(fit.m3.2_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 3a2bb0de4c57cb10036e548415ba5584.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.00 0.00 0.00 0.00 0.00 0.01 266 1
## K 2.01 0.10 1.39 0.51 1.66 3.98 186 1
## sigma 0.47 0.00 0.05 0.40 0.46 0.53 308 1
## lp__ 5.01 0.08 1.37 3.06 5.38 6.37 271 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:44:46 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.2_0.2)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K
store_results["d0.2.m2","r"]=mean(tmp$r)
store_results["d0.2.m2","alpha"]=alpha
store_results["d0.2.m2","sigma"]=mean(tmp$sigma)
store_results["d0.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d0.2.m2","likelihood"]=mean(tmp$lp__)
fit.m3.2_1.2 <- sampling(m3.2, data = m31.2.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #128 divergent transitions without correcting
## Warning: There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m3.2_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 3a2bb0de4c57cb10036e548415ba5584.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.01 0.00 0.01 0.00 0.01 0.02 182 1.01
## K 2.19 0.10 1.53 0.54 1.86 4.24 245 1.01
## sigma 0.84 0.00 0.08 0.75 0.84 0.94 303 1.00
## lp__ -30.25 0.08 1.25 -32.04 -29.92 -29.01 242 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:44:49 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.2_1.2)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K
store_results["d1.2.m2","r"]=mean(tmp$r)
store_results["d1.2.m2","alpha"]=alpha
store_results["d1.2.m2","sigma"]=mean(tmp$sigma)
store_results["d1.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d1.2.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_1.2), np = nuts_params(fit.m3.2_1.2),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
data { // observed variables
int<lower=1> tmax; // number of observations
vector[tmax] x; // state variables
}
parameters { // unobserved parameters
real r; // growth rate
real<lower=0> Kprim; // proxy for carrying capacity
real<lower=0> sigma;
}
model {
//priors
r ~ normal(0,1);
Kprim ~ exponential(10);
sigma ~ exponential(10);
for (t in 1:(tmax-1)){
x[t+1] ~ normal(r+x[t]-log(1+exp(x[t])/Kprim),sigma);
}
}
fit.m3.3_0 <- sampling(m3.3, data = m30.data, iter = 1000, chains = 2, cores = 2)
print(fit.m3.3_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 0703f69430b9e3ce71b1082a61a01f9d.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.53 0.01 0.10 0.41 0.53 0.66 328 1
## Kprim 1.46 0.02 0.32 1.06 1.43 1.89 397 1
## sigma 0.33 0.00 0.04 0.28 0.33 0.38 420 1
## lp__ 9.69 0.06 1.11 8.29 9.94 10.81 406 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:27:59 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.3_0)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d0.m3","r"]=mean(tmp$r)
store_results["d0.m3","alpha"]=alpha
store_results["d0.m3","sigma"]=mean(tmp$sigma)
store_results["d0.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d0.m3","likelihood"]=mean(tmp$lp__)
fit.m3.3_1 <- sampling(m3.3, data = m31.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.3_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 0703f69430b9e3ce71b1082a61a01f9d.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 1.36 0.01 0.26 1.03 1.35 1.71 368 1
## Kprim 0.67 0.01 0.23 0.39 0.64 0.98 438 1
## sigma 0.70 0.00 0.07 0.61 0.69 0.79 286 1
## lp__ -25.32 0.07 1.23 -26.93 -25.00 -24.10 296 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:02 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.3_1)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d1.m3","r"]=mean(tmp$r)
store_results["d1.m3","alpha"]=alpha
store_results["d1.m3","sigma"]=mean(tmp$sigma)
store_results["d1.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d1.m3","likelihood"]=mean(tmp$lp__)
fit.m3.3_0.2 <- sampling(m3.3, data = m30.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
print(fit.m3.3_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 0703f69430b9e3ce71b1082a61a01f9d.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 3.76 0.02 0.29 3.42 3.75 4.11 191 1.01
## Kprim 1.46 0.03 0.42 0.96 1.41 1.99 189 1.01
## sigma 0.28 0.00 0.03 0.24 0.28 0.32 287 1.01
## lp__ 11.29 0.06 1.18 9.87 11.50 12.49 373 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:04 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.3_0.2)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d0.2.m3","r"]=mean(tmp$r)
store_results["d0.2.m3","alpha"]=alpha
store_results["d0.2.m3","sigma"]=mean(tmp$sigma)
store_results["d0.2.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d0.2.m3","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.3_0.2), np = nuts_params(fit.m3.3_0.2),
pars = c("r","Kprim","sigma"),
off_diag_args = list(size = 1,alpha=1),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m3.3_1.2 <- sampling(m3.3, data = m31.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.3_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 0703f69430b9e3ce71b1082a61a01f9d.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 4.37 0.02 0.36 3.91 4.36 4.81 405 1
## Kprim 0.72 0.01 0.26 0.43 0.67 1.07 414 1
## sigma 0.62 0.00 0.06 0.54 0.62 0.71 269 1
## lp__ -27.66 0.07 1.19 -29.29 -27.37 -26.44 278 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:08 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m3.3_1.2)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d1.2.m3","r"]=mean(tmp$r)
store_results["d1.2.m3","alpha"]=alpha
store_results["d1.2.m3","sigma"]=mean(tmp$sigma)
store_results["d1.2.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d1.2.m3","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.3_1.2), np = nuts_params(fit.m3.3_1.2),
pars = c("r","Kprim","sigma"),
off_diag_args = list(size = 1,alpha=1/3),
np_style = pairs_style_np(div_size=2, div_shape = 19))
Keep in mind that the true values are \(r=0.1\) (and \(r=2\) for the second set of simulations), \(\alpha=0.1\), \(\sigma_0=\sqrt{0.05}\approx 0.22\) and \(\sigma_1=\sqrt{0.5}\approx 0.71\).
| | r | alpha | sigma | cor(r,DD) | likelihood |
|---|---|---|---|---|---|
| (r,alpha),r=0.1,sigma=0.2 | 0.0558 | 0.0670 | 0.2610 | 0.7754 | 33.2291 |
| (r,alpha),r=0.1,sigma=0.7 | 0.2677 | 0.1655 | 0.6313 | 0.8077 | -14.9754 |
| (r,alpha),r=2,sigma=0.2 | 2.0362 | 0.1103 | 0.2093 | 0.9762 | 42.8273 |
| (r,alpha),r=2,sigma=0.7 | 2.0074 | 0.1206 | 0.5914 | 0.9358 | -12.9748 |
| (r,K),r=0.1,sigma=0.2 | 0.0615 | 0.2875 | 0.2603 | -0.2257 | 35.7203 |
| (r,K),r=0.1,sigma=0.7 | 0.3411 | 0.2901 | 0.6331 | 0.5173 | -12.5680 |
| (r,K),r=2,sigma=0.2 | 0.0017 | 0.1836 | 0.4659 | 0.4713 | 5.0130 |
| (r,K),r=2,sigma=0.7 | 0.0098 | 0.1693 | 0.8409 | 0.7924 | -30.2546 |
| (r,Kprim),r=0.1,sigma=0.2 | 0.5306 | 0.6873 | 0.3343 | -0.8677 | 9.6877 |
| (r,Kprim),r=0.1,sigma=0.7 | 1.3583 | 1.4845 | 0.6968 | -0.8886 | -25.3223 |
| (r,Kprim),r=2,sigma=0.2 | 3.7584 | 0.6830 | 0.2823 | -0.9655 | 11.2934 |
| (r,Kprim),r=2,sigma=0.7 | 4.3683 | 1.3937 | 0.6249 | -0.9388 | -27.6605 |
Fairly different qualities of fit and correlations between parameters. Note FB: from previous fits, I would have expected \((r,K)\) to do a bit better than \((r,K')\) and \((r,\alpha)\) in some cases, notably in reducing the \((r,K)\) correlation. But the estimation of density-dependence was really worse with \((r,K)\) here, for reasons that are not entirely apparent yet. This is difficult to sort out without considering several algorithms, i.e., doing this in a frequentist setting as well to get a good look at the likelihood surfaces.
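The conversions used above when filling `store_results` (\(\alpha=(e^r-1)/K\) after the \((r,K)\) fit, \(\alpha=1/K'\) after the \((r,K')\) fit) can be sanity-checked directly. A minimal sketch, assuming all three parameterizations describe the same Beverton–Holt map \(N_{t+1} = e^{r} N_t/(1+\alpha N_t)\) (the model coded in m3.3 above, rewritten on the natural scale), with illustrative parameter values:

```r
r = 0.5; K = 5                                    # illustrative values, not the fitted ones
alpha = (exp(r) - 1)/K                            # conversion used after the (r,K) fit
Kprim = 1/alpha                                   # conversion used after the (r,K') fit
N = 2                                             # arbitrary current abundance
f_alpha = N * exp(r) / (1 + alpha * N)            # (r, alpha) form
f_K     = N * exp(r) / (1 + (exp(r) - 1) * N/K)   # (r, K) form
f_Kprim = N * exp(r) / (1 + N/Kprim)              # (r, K') form, as in m3.3
c(f_alpha, f_K, f_Kprim)                          # all three give the same one-step prediction
K * exp(r) / (1 + (exp(r) - 1))                   # one step from N = K returns K: K is the equilibrium
```

The last line is what makes the \((r,K)\) form attractive in principle: \(K\) is directly the equilibrium abundance.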
We will consider here the Ricker model. But first, back to the basics of logistic growth in discrete time. Unlike what you will find in a number of textbooks, equations like \(N_{t+1} = r' N_t (1 - N_t/K)\) are poor analogues of logistic growth: this one is called the logistic map, and although it has become famous for allowing all kinds of dynamical behaviour, including chaos, it has the poor taste of allowing negative abundances.
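To see the problem concretely, here is a quick sketch (parameter values are illustrative): once \(r' > 4\), or as soon as noise pushes \(N_t\) above \(K\), the logistic map overshoots the hump and subsequent iterates go negative.

```r
# logistic map N[t+1] = rprime * N[t] * (1 - N[t]/K); with rprime > 4 the
# trajectory eventually overshoots K and the next value turns negative
rprime = 4.5; K = 1; tmax = 12               # illustrative values
N = numeric(tmax); N[1] = 0.3
for (t in 1:(tmax - 1)){N[t+1] = rprime * N[t] * (1 - N[t]/K)}
min(N)                                       # negative: not much of an abundance
```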
A slightly better-behaved model, though still quite far from the original logistic growth, is the Ricker model \(N_{t+1} = N_t \exp(r(1 - N_t/K))\). The Ricker model has the good taste of allowing only positive values, and it can also be derived from mechanistic, individual-based models. There are several such derivations (see Kisdi & Geritz, and Brännström & Sumpter), but a very simple one comes from fish. Let's say that fish produce larvae: the production process is a stochastic exponential growth \(e^{r+\epsilon_t}\). Each larva then survives with probability \(\exp(-\alpha N_t)\), which declines exponentially with \(N_t\), reflecting that the more fish are already around eating larvae, the lower larval survival. This yields \(N_{t+1} = N_t e^{r+\epsilon_t -\alpha N_t}\), which is again the Ricker model, with \(\alpha = r/K\). You can think of the Ricker model as a logistic model with a delay in regulation, a consequence of the discretization of time.
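A quick check of the \(\alpha = r/K\) correspondence, iterating the deterministic Ricker map with the values of the first simulation below (\(r=0.1\), \(\alpha=0.1\), so \(K = r/\alpha = 1\)):

```r
r = 0.1; alpha = 0.1                           # values from the first simulation below
N = 0.1                                        # start well below equilibrium
for (t in 1:500){N = N * exp(r - alpha * N)}   # deterministic Ricker, no noise
c(N, r/alpha)                                  # N has settled at K = r/alpha = 1
```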
Here we will again try the two different parameterizations, \((r,\alpha)\) and \((r,K)\), but we will vary \(r\) some more (adding \(r=3.5\)) to make the model exhibit more varied dynamics.
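Why these \(r\) values produce such different dynamics can be read off the deterministic skeleton: the fixed point of the Ricker map is stable for \(0 < r < 2\). A rough sketch, assuming no noise:

```r
# late-time behaviour of the deterministic Ricker map, alpha = 0.1:
# r = 0.1 -> stable fixed point, r = 2 -> period-doubling threshold, r = 3.5 -> chaos
late_range = function(r, alpha = 0.1, burn = 1000, keep = 10){
  N = 1
  for (t in 1:burn){N = N * exp(r - alpha * N)}    # discard the transient
  vals = numeric(keep)
  for (t in 1:keep){N = N * exp(r - alpha * N); vals[t] = N}
  range(vals)                                      # spread of the last `keep` iterates
}
late_range(0.1)   # pinned at K = 1
late_range(2)     # (very close to) K = 20
late_range(3.5)   # wide range: chaotic fluctuations
```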
set.seed(42)
alpha=0.1
tmax=50
####Let's start with r=0.1
r=0.1
K=r/alpha #K=1.
N_R=N_R1=rep(NA,tmax)
N_R[1]=N_R1[1]=1
for (t in 1:(tmax-1)){N_R[t+1] = (N_R[t]*exp(r-alpha*N_R[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1[t+1] = (N_R1[t]*exp(r-alpha*N_R1[t]+rnorm(1,0,sqrt(0.5))))}
###
par(mfrow=c(2,2),pch=20)
plot(1:tmax,N_R,type="o",ylim=range(c(N_R,N_R1)),xlab="Time",main="r=0.1")
lines(1:tmax,N_R1,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")
####And go on with r=2
r=2
K=r/alpha #K=20
N_R.2=N_R1.2=rep(NA,tmax)
N_R.2[1]=N_R1.2[1]=1
for (t in 1:(tmax-1)){N_R.2[t+1] = (N_R.2[t]*exp(r-alpha*N_R.2[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1.2[t+1] = (N_R1.2[t]*exp(r-alpha*N_R1.2[t]+rnorm(1,0,sqrt(0.5))))}
###
plot(1:tmax,N_R.2,type="o",ylim=range(c(N_R.2,N_R1.2)),xlab="Time",main="r=2")
lines(1:tmax,N_R1.2,type="o",col="blue")
#And r=3.5
r=3.5
K=r/alpha #K=35
N_R.3=N_R1.3=rep(NA,tmax)
N_R.3[1]=N_R1.3[1]=1
for (t in 1:(tmax-1)){N_R.3[t+1] = (N_R.3[t]*exp(r-alpha*N_R.3[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1.3[t+1] = (N_R1.3[t]*exp(r-alpha*N_R1.3[t]+rnorm(1,0,sqrt(0.5))))}
###
plot(1:tmax,N_R.3,type="o",ylim=range(c(N_R.3,N_R1.3)),xlab="Time",main="r=3.5")
lines(1:tmax,N_R1.3,type="o",col="blue")
m40.data <- list(x=log(N_R), tmax = tmax)
m41.data <- list(x=log(N_R1), tmax = tmax)
m40.2.data <- list(x=log(N_R.2), tmax = tmax)
m41.2.data <- list(x=log(N_R1.2), tmax = tmax)
m40.3.data <- list(x=log(N_R.3), tmax = tmax)
m41.3.data <- list(x=log(N_R1.3), tmax = tmax)
store_results=matrix(NA,nrow=12,ncol=5,dimnames=list(c("d0.m1","d1.m1","d0.2.m1","d1.2.m1","d0.3.m1","d1.3.m1","d0.m2","d1.m2","d0.2.m2","d1.2.m2","d0.3.m2","d1.3.m2"),c("r","alpha","sigma","cor(r,DD)","likelihood")))
data { // observed variables
int<lower=1> tmax; // number of observations
vector[tmax] x; // state variables
}
parameters { // unobserved parameters
real r; // growth rate
real<lower=0> alpha; // density-dependence
real<lower=0> sigma;
}
model {
//priors
r ~ normal(0,1);
alpha ~ exponential(10);
sigma ~ exponential(10);
for (t in 1:(tmax-1)){
x[t+1] ~ normal(r+x[t]-alpha*exp(x[t]),sigma);
}
}
fit.m4.1_0 <- sampling(m4.1, data = m40.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.05 0.00 0.06 -0.02 0.05 0.12 333 1.00
## alpha 0.06 0.00 0.04 0.01 0.06 0.12 289 1.00
## sigma 0.26 0.00 0.03 0.23 0.26 0.29 597 1.00
## lp__ 33.31 0.07 1.33 31.60 33.61 34.67 333 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:52 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0)
tmp_mean=lapply(tmp,mean)
store_results["d0.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1 <- sampling(m4.1, data = m41.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1 , probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.24 0.01 0.14 0.07 0.24 0.41 389 1.01
## alpha 0.13 0.00 0.05 0.07 0.13 0.20 382 1.01
## sigma 0.64 0.00 0.06 0.57 0.63 0.72 306 1.01
## lp__ -14.89 0.07 1.29 -16.70 -14.55 -13.62 341 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:55 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1)
tmp_mean=lapply(tmp,mean)
store_results["d1.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_0.2 <- sampling(m4.1, data = m40.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0.2 , probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 2.08 0.00 0.08 1.98 2.08 2.18 352 1.00
## alpha 0.11 0.00 0.00 0.10 0.11 0.11 307 1.00
## sigma 0.20 0.00 0.02 0.18 0.20 0.23 276 1.01
## lp__ 44.13 0.08 1.39 42.41 44.47 45.48 304 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:28:57 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0.2)
tmp_mean=lapply(tmp,mean)
store_results["d0.2.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1.2 <- sampling(m4.1, data = m41.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 1.86 0.01 0.13 1.69 1.86 2.04 583 1.00
## alpha 0.10 0.00 0.01 0.09 0.10 0.10 502 1.00
## sigma 0.60 0.00 0.06 0.53 0.60 0.67 725 1.00
## lp__ -12.93 0.07 1.28 -14.70 -12.56 -11.62 368 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:29:00 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1.2)
tmp_mean=lapply(tmp,mean)
store_results["d1.2.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_0.3 <- sampling(m4.1, data = m40.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 3.50 0.00 0.04 3.45 3.50 3.55 716 1.00
## alpha 0.10 0.00 0.00 0.10 0.10 0.10 624 1.00
## sigma 0.22 0.00 0.02 0.20 0.22 0.25 401 1.00
## lp__ 36.22 0.08 1.36 34.51 36.56 37.57 315 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:29:02 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0.3)
tmp_mean=lapply(tmp,mean)
store_results["d0.3.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.3.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1.3 <- sampling(m4.1, data = m41.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 75899dbbd0f3524333e19c42af37418f.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 3.49 0.00 0.12 3.33 3.49 3.65 627 1.00
## alpha 0.10 0.00 0.00 0.10 0.10 0.10 708 1.01
## sigma 0.74 0.00 0.07 0.66 0.74 0.84 637 1.00
## lp__ -30.16 0.06 1.27 -31.73 -29.82 -28.90 459 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:29:04 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1.3)
tmp_mean=lapply(tmp,mean)
store_results["d1.3.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.3.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
data { // observed variables
int<lower=1> tmax; // number of observations
vector[tmax] x; // state variables
}
parameters { // unobserved parameters
real r; // growth rate
real<lower=0> K; // carrying capacity
real<lower=0> sigma;
}
model {
//priors
r ~ normal(0,1);
K ~ exponential(10);
sigma ~ exponential(10);
for (t in 1:(tmax-1)){
x[t+1] ~ normal(x[t]+r*(1-exp(x[t])/K),sigma);
}
}
fit.m4.2_0 <- sampling(m4.2, data = m40.data, iter = 1000, chains = 2, cores = 2,control=list(adapt_delta=0.999))
## Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.01 0.00 0.01 0.00 0.01 0.03 98 1.02
## K 0.23 0.02 0.17 0.06 0.19 0.46 95 1.02
## sigma 0.26 0.00 0.03 0.23 0.26 0.30 326 1.00
## lp__ 32.90 0.16 1.49 30.87 33.29 34.28 89 1.02
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:29:48 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0)
tmp_mean=lapply(tmp,mean)
store_results["d0.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
fit.m4.2_1 <- sampling(m4.2, data = m41.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999))
## Warning: There were 5 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m4.2_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.02 0.00 0.02 0.00 0.01 0.04 259 1.00
## K 0.23 0.01 0.16 0.06 0.19 0.43 260 1.01
## sigma 0.65 0.00 0.06 0.57 0.65 0.73 405 1.01
## lp__ -16.76 0.07 1.21 -18.44 -16.44 -15.52 282 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:29:51 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1)
tmp_mean=lapply(tmp,mean)
store_results["d1.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_1), np = nuts_params(fit.m4.2_1),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m4.2_0.2 <- sampling(m4.2, data = m40.2.data, iter = 10000, chains = 2, cores = 2,control=list(adapt_delta=0.999))
## Warning: There were 24 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m4.2_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=10000; warmup=5000; thin=1;
## post-warmup draws per chain=5000, total post-warmup draws=10000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.00 0.00 0.00 0.00 0.00 0.01 1986 1
## K 0.22 0.00 0.15 0.06 0.18 0.42 1970 1
## sigma 0.81 0.00 0.08 0.72 0.81 0.91 3573 1
## lp__ -29.99 0.03 1.33 -31.84 -29.63 -28.69 2106 1
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:30:00 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0.2)
tmp_mean=lapply(tmp,mean)
store_results["d0.2.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_0.2), np = nuts_params(fit.m4.2_0.2),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1),
np_style = pairs_style_np(div_size=2, div_shape = 19))
fit.m4.2_1.2 <- sampling(m4.2, data = m41.2.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999))
## Warning: There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.01 0.00 0.01 0.00 0.01 0.02 143 1.00
## K 0.23 0.01 0.17 0.06 0.19 0.46 153 1.00
## sigma 1.29 0.01 0.12 1.14 1.28 1.44 336 1.00
## lp__ -59.53 0.11 1.45 -61.23 -59.12 -58.19 166 1.01
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:30:06 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1.2)
tmp_mean=lapply(tmp,mean)
store_results["d1.2.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_1.2), np = nuts_params(fit.m4.2_1.2),
pars = c("r","K","sigma"),
off_diag_args = list(size = 1,alpha=1),
np_style = pairs_style_np(div_size=2, div_shape = 19))
Here, K is very poorly estimated (in an unconstrained fit it can even go negative, which would imply a positive effect of density on growth). This could be corrected in the model definition in Stan.
fit.m4.2_0.3 <- sampling(m4.2, data = m40.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: There were 11 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_0.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.01 0.00 0.01 0.00 0.01 0.03 213 1.01
## K 0.24 0.01 0.16 0.09 0.20 0.43 213 1.01
## sigma 2.29 0.01 0.18 2.07 2.28 2.50 337 1.00
## lp__ -101.80 0.09 1.40 -103.35 -101.43 -100.59 221 1.00
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:58:21 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0.3)
tmp_mean=lapply(tmp,mean)
store_results["d0.3.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.3.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
fit.m4.2_1.3 <- sampling(m4.2, data = m41.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: There were 18 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: d96ec9a672872441c4e75e1f125d7a42.
## 2 chains, each with iter=1000; warmup=500; thin=1;
## post-warmup draws per chain=500, total post-warmup draws=1000.
##
## mean se_mean sd 10% 50% 90% n_eff Rhat
## r 0.02 0.00 0.01 0.01 0.02 0.04 134 1.01
## K 0.27 0.01 0.16 0.11 0.22 0.49 133 1.01
## sigma 2.62 0.02 0.22 2.36 2.61 2.90 162 1.03
## lp__ -114.28 0.12 1.44 -116.41 -113.87 -112.87 149 1.02
##
## Samples were drawn using NUTS(diag_e) at Mon May 4 11:58:24 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1.3)
tmp_mean=lapply(tmp,mean)
store_results["d1.3.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.3.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
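One caveat on the derived \(\alpha\) in the \((r,K)\) rows of the table below: it is computed as a ratio of posterior means, `mean(tmp$r)/mean(tmp$K)`, which is not the same thing as the posterior mean of the per-draw ratio when the posterior is skewed. A minimal sketch with fake draws (illustrative numbers, not the actual posteriors):

```r
set.seed(1)
K_draws = exp(rnorm(1000, 0, 1))   # strongly skewed fake posterior for K
r_draws = rnorm(1000, 0.1, 0.01)   # narrow fake posterior for r
mean(r_draws) / mean(K_draws)      # plug-in ratio of means (as used above)
mean(r_draws / K_draws)            # posterior mean of alpha = r/K: noticeably larger
```

With tight, symmetric posteriors the two coincide; with skewed, strongly correlated posteriors like those in the pairs plots, the per-draw version is the safer summary.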
| | r | alpha | sigma | cor(r,DD) | likelihood |
|---|---|---|---|---|---|
| (r,alpha),r=0.1,sigma=0.2 | 4.9e-02 | 6.2e-02 | 2.6e-01 | 7.3e-01 | 3.3e+01 |
| (r,alpha),r=0.1,sigma=0.7 | 2.4e-01 | 1.3e-01 | 6.4e-01 | 7.5e-01 | -1.5e+01 |
| (r,alpha),r=2,sigma=0.2 | 2.1e+00 | 1.1e-01 | 2.0e-01 | 9.2e-01 | 4.4e+01 |
| (r,alpha),r=2,sigma=0.7 | 1.9e+00 | 9.7e-02 | 6.0e-01 | 7.7e-01 | -1.3e+01 |
| (r,alpha),r=3.5,sigma=0.2 | 3.5e+00 | 1.0e-01 | 2.2e-01 | 6.2e-01 | 3.6e+01 |
| (r,alpha),r=3.5,sigma=0.7 | 3.5e+00 | 1.0e-01 | 7.4e-01 | 5.0e-01 | -3.0e+01 |
| (r,K),r=0.1,sigma=0.2 | 9.6e-03 | 4.1e-02 | 2.6e-01 | 4.4e-01 | 3.3e+01 |
| (r,K),r=0.1,sigma=0.7 | 2.0e-02 | 8.6e-02 | 6.5e-01 | 8.4e-01 | -1.7e+01 |
| (r,K),r=2,sigma=0.2 | 3.2e-03 | 1.5e-02 | 8.1e-01 | 8.3e-01 | -3.0e+01 |
| (r,K),r=2,sigma=0.7 | 1.0e-02 | 4.5e-02 | 1.3e+00 | 9.7e-01 | -6.0e+01 |
| (r,K),r=3.5,sigma=0.2 | 1.4e-02 | 6.0e-02 | 2.3e+00 | 9.9e-01 | -1.0e+02 |
| (r,K),r=3.5,sigma=0.7 | 2.1e-02 | 7.7e-02 | 2.6e+00 | 9.9e-01 | -1.1e+02 |
[theta-Ricker for the next session? ample material for a session 2…]
A nice real-life example, including more complex models such as models with delays: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1600-0587.2009.05604.x
See also the very smart https://esajournals.onlinelibrary.wiley.com/doi/pdf/10.1002/ecs2.2215